Spatial hearing loss, also known as spatial processing deficit, is a form of hearing impairment characterized by an inability to use spatial cues, i.e. where a sound originates in space, to understand speech in the presence of background noise (Cameron & Dillon, 2007).[1][2][3]
People with spatial hearing loss have difficulty processing speech that arrives from one direction while simultaneously filtering out noise arriving from other directions. Spatial hearing loss is not caused by peripheral hearing loss and is thought to occur in the auditory pathways of the brain. Research has shown spatial hearing loss to be a leading cause of central auditory processing disorder (CAPD) in children (Cameron & Dillon, 2008). Children with spatial hearing loss commonly present with difficulties understanding speech in the classroom.[1][2] Spatial hearing loss is found in most people over 60 years of age and is independent of other types of age-related hearing loss.[4] As with presbyacusis, spatial hearing ability varies with age: through childhood and into adulthood there is effectively a spatial hearing gain, as understanding speech in noise becomes easier, while from middle age onwards spatial hearing loss sets in and understanding speech in noise becomes harder again.
Those with no spatial hearing loss are able to use the signals arriving at the two ears so that noise seems to originate from a different location to the speech being listened to. The central auditory processing of normal listeners is able to squelch out the noise and hear the speech through the use of interaural phase and level differences.[5] Those with spatial hearing loss are unable to make use of these phase and level cues, and are therefore unable to squelch out the noise.
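The interaural phase (time) cue described above arises because a sound off to one side reaches the nearer ear slightly before the farther ear. As a rough illustration, the classic Woodworth spherical-head approximation estimates this interaural time difference from the source azimuth; the sketch below is illustrative only (the head radius and function name are assumptions, not part of any clinical test):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (ITD) in seconds for a
    spherical head (Woodworth's formula): ITD = (a / c) * (theta + sin theta),
    where a is head radius, c is the speed of sound, theta is azimuth."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source straight ahead produces no ITD; a source 90 degrees to one
# side produces the maximum ITD, roughly 0.66 ms for this head radius.
print(round(woodworth_itd(0) * 1000, 2))   # 0.0
print(round(woodworth_itd(90) * 1000, 2))  # 0.66
```

The auditory system compares these sub-millisecond arrival differences (together with level differences) across the two ears to separate speech from spatially distinct noise.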
Many neuroscience studies have facilitated the development and refinement of a speech processing model. This model shows cooperation between the two hemispheres of the brain, with asymmetric inter-hemispheric and intrahemispheric connectivity consistent with the left hemisphere specialization for phonological processing.[6]
The corpus callosum (CC) is the major route of communication between the two hemispheres. At maturity it is a large mass of white matter, consisting of bundles of fibres linking the white matter of the two cerebral hemispheres. Its caudal portion and splenium contain fibres that originate from the primary and secondary auditory cortices, and from other auditory responsive areas.[7] Transcallosal interhemispheric transfer of auditory information plays a significant role in spatial hearing functions that depend on binaural cues.[8] Various studies have shown that despite normal audiograms, children with known auditory interhemispheric transfer deficits have particular difficulty localizing sound and understanding speech in noise.[9]
The CC is relatively slow to mature, continuing to increase in size until the third decade of life, after which it slowly begins to shrink.[10] LiSN-S SRT scores (see below) show that the ability to understand speech in noisy environments develops with age, becoming adult-like by about 14 years and starting to decline by 40 years of age.
Spatial hearing loss can be diagnosed using the Listening in Spatialized Noise – Sentences test (LiSN-S),[11] which was designed to assess the ability of children with central auditory processing disorder (CAPD) to understand speech in background noise. The LiSN-S allows audiologists to measure how well a person uses spatial and pitch information to understand speech in noise. Inability to use spatial information has been found to be a leading cause of CAPD in children and is referred to as spatial hearing loss or spatial processing disorder (Cameron & Dillon, 2008).[2]
Test participants repeat a series of target sentences which are presented simultaneously with competing speech. The listener's speech reception threshold (SRT) for target sentences is calculated using an adaptive procedure. The targets are perceived as coming from in front of the listener whereas the distracters vary according to where they are perceived spatially (either directly in front or either side of the listener). The vocal identity of the distracters also varies (either the same as, or different to, the speaker of the target sentences) (Cameron & Dillon, 2007).
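The specific adaptive procedure used by the LiSN-S is not described here, but adaptive SRT measurement in general can be sketched as a simple one-up/one-down track: the signal-to-noise ratio is lowered after each correct response and raised after each error, so the track converges on the level at which the listener is correct about half the time. The following is a generic illustrative sketch, not the LiSN-S algorithm itself:

```python
def adaptive_srt(listener, start_snr_db=0.0, step_db=2.0, n_trials=20):
    """Generic one-up/one-down adaptive track. `listener` is a callable
    returning True when the sentence is repeated correctly at a given SNR.
    The SRT is estimated as the mean SNR over the later trials, after the
    track has converged near the ~50%-correct point."""
    snr = start_snr_db
    history = []
    for _ in range(n_trials):
        correct = listener(snr)          # response at this trial's SNR
        history.append(snr)
        snr += -step_db if correct else step_db
    tail = history[n_trials // 2:]       # discard the initial approach
    return sum(tail) / len(tail)

# Hypothetical listener who responds correctly whenever SNR exceeds -4 dB:
srt = adaptive_srt(lambda snr: snr > -4.0)
```

With this idealized listener the track oscillates around the threshold, and the estimated SRT falls between the two SNR levels the track alternates across.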
Performance on the LiSN-S is evaluated by comparing listeners' performance across four listening conditions, generating two SRT measures and three "advantage" measures. The advantage measures represent the benefit in dB gained when talker cues, spatial cues, or both are available to the listener. The use of advantage measures minimizes the influence of higher-order skills on test performance (Cameron & Dillon, 2008), serving to control for the inevitable differences that exist between individuals in functions such as language or memory.
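Because each advantage measure is a difference in dB between two condition SRTs, the calculation itself is straightforward. The sketch below assumes four conditions crossing distracter voice (same/different talker) with perceived distracter location (co-located vs. spatially separated); the labels and SRT values are illustrative assumptions, not published norms:

```python
def lisn_s_advantages(srt_db):
    """Compute LiSN-S-style 'advantage' measures (in dB) as differences
    between listening-condition SRTs. A lower (more negative) SRT means
    better performance, so subtracting a cued condition's SRT from the
    no-cue baseline gives the benefit, in dB, of that cue."""
    baseline = srt_db[("same_voice", "0deg")]      # neither cue available
    return {
        "talker_advantage":  baseline - srt_db[("diff_voice", "0deg")],
        "spatial_advantage": baseline - srt_db[("same_voice", "90deg")],
        "total_advantage":   baseline - srt_db[("diff_voice", "90deg")],
    }

# Illustrative (made-up) SRTs in dB for the four conditions:
srts = {
    ("same_voice", "0deg"):  -1.0,
    ("diff_voice", "0deg"):  -6.0,
    ("same_voice", "90deg"): -11.0,
    ("diff_voice", "90deg"): -14.0,
}
adv = lisn_s_advantages(srts)   # spatial advantage here would be 10 dB
```

A listener with spatial hearing loss would show a markedly reduced spatial advantage (little benefit from separating the distracters), even when the talker advantage is normal.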
Dichotic listening tests can be used to measure the interhemispheric transfer of auditory information. Dichotic listening performance increases (and the right-ear advantage decreases) with the development of the CC, peaking before the third decade. From middle age onwards, as the CC reduces in size, dichotic listening performance worsens, primarily in the left ear. Dichotic listening tests typically involve two different auditory stimuli (usually speech) presented simultaneously, one to each ear, using a set of headphones. Participants are asked to attend to one or (in a divided-attention test) both of the messages.[12]
Deafness Research UK awarded its annual Pauline Ashley Prize for 2007 to UK researcher Sam Irving of the MRC Institute for Hearing Research in Nottingham. Irving will work with a team led by M. Charles Liberman at the Eaton Peabody Lab at MIT/Harvard. This will follow up work done in 2006 by scientists in the Oxford Auditory Neuroscience Group at Oxford in the UK.[13]
The research will compare the performance of ferrets whose olivocochlear bundle (OCB) has been surgically removed with that of normal ferrets in a "ring of sound" noise device. Their theory is that the OCB, a part of the brain known to carry feedback information from the brain back to the ear, is the part that is malfunctioning in some patients.[13]
A spatial hearing aid can help those with spatial hearing loss detect the direction from which a sound originates.[1] Simple amplification can also improve audibility and sound localization.[2]